- Search Results
- Search for: All records
- Total Resources: 3
- Author / Contributor
- Filter by Author / Creator
- Xie, Lulu (3)
- Chowdhury, Kanchan (2)
- Guan, Hong (2)
- Zhou, Lixi (2)
- Zou, Jia (2)
- Goel, Rajeev (1)
- Jia (1)
- Maciejewski, Ross (1)
- Nag, Soham (1)
- Prisby, Jonathan (1)
- Swamy, Niranjan (1)
- Wang, Yancheng (1)
- Xiao, Chaowei (1)
- Xiao, Xusheng (1)
- Xiong, Li (1)
- Yang, Yingzhen (1)
- Yu, Lei (1)
- Zou (1)
- Free, publicly-accessible full text available May 24, 2026
- Guan, Hong; Yu, Lei; Zhou, Lixi; Xiong, Li; Chowdhury, Kanchan; Xie, Lulu; Xiao, Xusheng; Zou, Jia (Proceedings of the ACM on Management of Data)
  With the growing adoption of privacy-preserving machine learning algorithms such as Differentially Private Stochastic Gradient Descent (DP-SGD), training or fine-tuning models on private datasets has become increasingly common. This shift has created a need for models offering varying privacy guarantees and utility levels to satisfy diverse user requirements. Managing numerous versions of large models introduces significant operational challenges, including increased inference latency, higher resource consumption, and elevated costs. Model deduplication is a technique widely used by model serving and database systems to support high-performance, low-cost inference queries and model diagnosis queries. However, no existing model deduplication work has considered privacy, leading to unbounded aggregation of privacy costs for certain deduplicated models and to inefficiencies when deduplicating DP-trained models. We formalize the problem of deduplicating DP-trained models for the first time and propose a novel privacy- and accuracy-aware deduplication mechanism to address it. We developed a greedy strategy that selects and assigns base models to target models to minimize storage and privacy costs. When deduplicating a target model, we dynamically schedule accuracy validations and apply the Sparse Vector Technique to reduce the privacy costs associated with private validation data. Compared to baselines, our approach improved the compression ratio by up to 35× for individual models (including large language models and vision transformers). We also observed up to 43× inference speedup due to the reduction of I/O operations.
  Free, publicly-accessible full text available June 17, 2026
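The Sparse Vector Technique (SVT) mentioned in the abstract above is a standard differential-privacy primitive: a sequence of threshold queries is answered against a once-perturbed noisy threshold, and privacy budget is charged only for the few queries reported as "above." Below is a minimal, illustrative sketch of the classic AboveThreshold variant applied to accuracy validation; the function name, the fixed random seed, and the unit-sensitivity noise scales are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def sparse_vector_validate(accuracies, threshold, epsilon, max_passes=1):
    """AboveThreshold-style SVT sketch: report which candidate models'
    (private) validation accuracies exceed a noisy threshold, stopping
    after max_passes 'above' answers. Assumes query sensitivity 1."""
    rng = np.random.default_rng(0)  # fixed seed for reproducibility only
    # Perturb the threshold once with Laplace noise of scale 2/epsilon.
    noisy_threshold = threshold + rng.laplace(scale=2.0 / epsilon)
    passed = []
    answered_above = 0
    for i, acc in enumerate(accuracies):
        # Each query gets fresh Laplace noise of scale 4*max_passes/epsilon.
        noisy_acc = acc + rng.laplace(scale=4.0 * max_passes / epsilon)
        if noisy_acc >= noisy_threshold:
            passed.append(i)
            answered_above += 1
            if answered_above >= max_passes:
                break  # budget for 'above' answers is exhausted
    return passed

candidates = [0.92, 0.41, 0.95, 0.88]
print(sparse_vector_validate(candidates, threshold=0.8, epsilon=1.0))
```

The key design point SVT exploits is that "below" answers are informationally cheap: only the indices reported above the threshold consume privacy budget, which is why the abstract pairs it with dynamically scheduled accuracy validations.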
- Xie, Lulu; Wang, Yancheng; Guan, Hong; Nag, Soham; Goel, Rajeev; Swamy, Niranjan; Yang, Yingzhen; Xiao, Chaowei; Prisby, Jonathan; Maciejewski, Ross; et al. (IEEE)
  Free, publicly-accessible full text available December 15, 2025
An official website of the United States government
